
Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model

Yang, Jichang, Chen, Hegan, Chen, Jia, Wang, Songqi, Wang, Shaocong, Yu, Yifei, Chen, Xi, Wang, Bo, Zhang, Xinyuan, Cui, Binbin, Li, Yi, Lin, Ning, Xu, Meng, Li, Yi, Xu, Xiaoxin, Qi, Xiaojuan, Wang, Zhongrui, Zhang, Xumeng, Shang, Dashan, Wang, Han, Liu, Qi, Cheng, Kwang-Ting, Liu, Ming

arXiv.org Artificial Intelligence

The human brain imagines complicated scenes when reading a novel. Replicating this imagination is one of the ultimate goals of AI-Generated Content (AIGC). However, current AIGC methods, such as score-based diffusion, are still deficient in speed and efficiency. This deficiency is rooted in the difference between the brain and digital computers. Digital computers have physically separated storage and processing units, resulting in frequent data transfers during iterative calculations and incurring large time and energy overheads. This issue is further intensified by the conversion of inherently continuous and analog generation dynamics, which can be formulated as neural differential equations, into discrete and digital operations. Inspired by the brain, we propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion, employing emerging resistive memory. The integration of storage and computation within resistive memory synapses surmounts the von Neumann bottleneck, benefiting generative speed and energy efficiency. The closed-loop feedback integrator is time-continuous, analog, and compact, physically implementing an infinite-depth neural network. Moreover, the software-hardware co-design is intrinsically robust to analog noise. We experimentally validate our solution with 180 nm resistive memory in-memory computing macros. Demonstrating equivalent generative quality to the software baseline, our system achieved remarkable enhancements in generative speed for both unconditional and conditional generation tasks, by factors of 64.8 and 156.5, respectively. Moreover, it accomplished reductions in energy consumption by factors of 5.2 and 4.1. Our approach heralds a new horizon for hardware solutions in edge computing for generative AI applications.
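The solver above integrates the generation dynamics in continuous time; a conventional digital sampler instead discretizes the same neural ODE into many Euler steps, each requiring a pass through the network and the associated memory traffic. A minimal sketch of that discrete baseline, with a hypothetical `drift` function standing in for the trained score network (all names here are illustrative, not from the paper):

```python
import numpy as np

def drift(x, t, target):
    # Toy stand-in for the learned score/drift network: relaxes the
    # state toward a fixed "data" point. A real diffusion sampler
    # would evaluate a trained neural network here.
    return -(x - target)

def euler_solve(x0, target, t_end=5.0, n_steps=500):
    """Discrete Euler integration of dx/dt = drift(x, t).
    Each of the n_steps iterations corresponds to one network
    evaluation plus data movement on a digital machine; the analog
    in-memory solver replaces this loop with a time-continuous
    feedback integrator."""
    x = np.asarray(x0, dtype=float).copy()
    dt = t_end / n_steps
    for i in range(n_steps):
        x += dt * drift(x, i * dt, target)
    return x

target = np.array([1.0, -2.0])
x_final = euler_solve(np.zeros(2), target)
```

The state relaxes exponentially toward `target`, so after 500 steps the residual error is on the order of e^-5 of the initial offset.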


Analog Neural Networks of Limited Precision I: Computing with Multilinear Threshold Functions

Neural Information Processing Systems

Experimental evidence has shown analog neural networks to be extremely fault-tolerant, in particular to limited precision. Analog neurons with limited precision essentially compute k-ary weighted multilinear threshold functions. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k ≥ 3, although such sets exist for binary and ternary neural networks. The weights can be made integers of only O((z + k) log(z + k)) bits.
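A minimal sketch of the k-ary threshold unit the abstract refers to. This is simplified: a plain weighted sum quantized by k-1 ordered thresholds, omitting the multilinear product terms the paper analyzes; all names are illustrative.

```python
def kary_threshold(weights, inputs, thresholds):
    """k-ary weighted threshold neuron (simplified sketch):
    the weighted sum is quantized into one of k output levels
    by k-1 ordered thresholds. For k = 2 (a single threshold)
    this reduces to the usual binary threshold neuron."""
    s = sum(w * x for w, x in zip(weights, inputs))
    level = 0
    for t in sorted(thresholds):
        if s >= t:
            level += 1
    return level  # an integer in 0..k-1

# Binary special case: one threshold at 0 gives a perceptron-style unit.
kary_threshold([1.0, -1.0], [0.7, 0.2], [0.0])
# Ternary case: two thresholds give three output levels.
kary_threshold([1.0, 1.0], [0.4, 0.4], [-0.5, 0.5])
```

The limited-precision result quoted above concerns how coarsely `weights` and `thresholds` can be quantized without changing the function computed.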


Analog Neural Networks as Decoders

Neural Information Processing Systems

We have previously demonstrated the use of a continuous Hopfield neural network as a K-Winner-Take-All (KWTA) network [Majani et al., 1989, Erlanson and Abu-Mostafa, 1988]. Given an input of N real numbers, such a network will converge to a vector of K positive-one components and (N - K) negative-one components, with the positive positions indicating the K largest input components. In addition, we have shown that the (N choose K) such vectors are the only stable states of the system. One application of the KWTA network is the analog decoding of error-correcting codes [Majani et al., 1989, Platt and Hopfield, 1986]. Here, a known set of vectors (the codewords) is transmitted over a noisy channel. At the receiver's end of the channel, the initial vector must be reconstructed from the noisy vector.
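The KWTA stable state doubles as a decoder: for constant-weight codewords (+1 in exactly K positions, -1 elsewhere), setting +1 at the K largest received components recovers the closest codeword. A minimal digital sketch of that stable state, which the analog network reaches by continuous dynamics (names are illustrative, not from the paper):

```python
import numpy as np

def kwta_decode(received, k):
    """Return the KWTA stable state for a real-valued input:
    +1 at the positions of the k largest components, -1 elsewhere.
    For constant-weight codes this is the nearest codeword in
    Euclidean distance."""
    out = -np.ones_like(received)
    out[np.argsort(received)[-k:]] = 1.0
    return out

sent = np.array([1.0, -1.0, 1.0, -1.0])          # a K = 2 codeword
noisy = sent + np.array([0.2, 0.3, -0.1, -0.4])  # channel noise
decoded = kwta_decode(noisy, 2)                  # recovers `sent`
```

The analog network performs this argmax-style selection implicitly: its only attractors are the (N choose K) codeword vectors, so relaxation from the noisy input is the decoding.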


An Analog Neural Network Inspired by Fractal Block Coding

Pineda, Fernando J., Andreou, Andreas G.

Neural Information Processing Systems

We consider the problem of decoding block coded data using a physical dynamical system. We sketch out a decompression algorithm for fractal block codes and then show how to implement a recurrent neural network using physically simple but highly nonlinear analog circuit models of neurons and synapses. The nonlinear system has many fixed points, but we have at our disposal a procedure to choose the parameters in such a way that only one solution, the desired solution, is stable. As a partial proof of concept, we present experimental data from a small system, a 16-neuron analog CMOS chip fabricated in a 2 µm analog p-well process. This chip operates in the subthreshold regime and, for each choice of parameters, converges to a unique stable state. Each state exhibits a qualitatively fractal shape.
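Fractal block decoding recovers a signal as the unique fixed point of a contractive block transform, which the chip reaches through its analog dynamics. A deliberately tiny digital sketch of the same idea, using a hypothetical one-dimensional transform rather than the paper's actual code (all names and parameters are illustrative):

```python
import numpy as np

def fractal_decode(scale, offset, n_iter=50, size=8):
    """Toy fractal block decoding: repeatedly apply a contractive
    block transform (coarsen the signal, then scale and shift it into
    the range blocks) until the signal converges. For |scale| < 1 the
    map is a contraction, so the fixed point -- the decoded signal --
    is unique and independent of the starting state."""
    x = np.zeros(size)
    for _ in range(n_iter):
        # "Domain" block: a coarse (downsampled) view of the current signal.
        coarse = x.reshape(size // 2, 2).mean(axis=1)
        # "Range" blocks: a contractive affine map of the domain block.
        block = scale * coarse + offset
        x = np.concatenate([block, block])
    return x
```

With `scale=0.5, offset=1.0` the iteration converges to the constant signal 2.0 (the solution of c = 0.5·c + 1); the analog chip reaches its stable state by the same contraction principle, in continuous time.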



Analog Neural Networks as Decoders

Erlanson, Ruth, Abu-Mostafa, Yaser

Neural Information Processing Systems

In turn, KWTA networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.

